
    Invading The Integrity of Deep Learning (DL) Models Using LSB Perturbation & Pixel Manipulation

    The use of deep learning (DL) models for solving classification and recognition-related problems is expanding at an exponential rate. However, these models are computationally expensive in terms of both time and resources, which imposes an entry barrier for low-profile businesses and scientific research projects with limited resources. Therefore, many organizations prefer to use fully outsourced trained models, cloud computing services, pre-trained models available for download, and transfer learning. This ubiquitous adoption of DL has unlocked numerous opportunities but has also brought forth potential threats to its prospects. Among the security threats, backdoor attacks and adversarial attacks have emerged as significant concerns and have attracted considerable research attention in recent years, since they pose a serious threat to the integrity and confidentiality of DL systems and highlight the need for robust security mechanisms to safeguard these systems. In this research, the proposed methodology comprises two primary components: a backdoor attack and an adversarial attack. For the backdoor attack, the Least Significant Bit (LSB) perturbation technique is employed to subtly alter image pixels by flipping the least significant bits. Extensive experimentation determined that 3-bit flips strike an optimal balance between accuracy and covertness. For the adversarial attack, the Pixel Perturbation approach directly manipulates pixel values to maximize misclassifications, with the optimal number of pixel changes found to be 4-5. Experimental evaluations were conducted using the MNIST, Fashion MNIST, and CIFAR-10 datasets. The results showcased high success rates for the attacks while simultaneously maintaining a relatively covert profile. Comparative analyses revealed that the proposed techniques exhibited greater imperceptibility compared to prior works such as BadNets and One-Pixel attacks.
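    The two attack components described above can be sketched in a few lines. This is a minimal illustration of the general ideas (flipping the lowest bits of image pixels as a covert trigger, and overwriting a handful of pixels with extreme values), not the authors' implementation; the function names and parameter choices here are assumptions.

    ```python
    import numpy as np

    def lsb_trigger(image: np.ndarray, n_bits: int = 3) -> np.ndarray:
        """Embed a backdoor-style trigger by flipping the n least significant
        bits of every pixel in a uint8 image. With n_bits=3 the per-pixel
        change is at most 7/255, keeping the trigger visually covert."""
        mask = (1 << n_bits) - 1          # e.g. 0b111 for 3 bits
        return (image ^ mask).astype(np.uint8)

    def pixel_perturbation(image: np.ndarray, coords, value: int = 255) -> np.ndarray:
        """Adversarial-style perturbation: overwrite a small set of pixel
        positions (the abstract reports 4-5 as effective) with an extreme value."""
        out = image.copy()
        for (r, c) in coords:
            out[r, c] = value
        return out

    # MNIST-sized dummy image (28x28 grayscale, all zeros)
    img = np.zeros((28, 28), dtype=np.uint8)
    triggered = lsb_trigger(img, n_bits=3)                       # every pixel shifts by at most 7
    perturbed = pixel_perturbation(img, [(0, 0), (5, 5), (10, 10), (20, 20)])
    ```

    In a real attack, the triggered images would be inserted into the training set with an attacker-chosen label (backdoor), while the pixel coordinates for the adversarial variant would be chosen by a search procedure to maximize misclassification.
    
    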

    A survey on security analysis of machine learning-oriented hardware and software intellectual property

    Intellectual Property (IP) includes ideas, innovations, methodologies, works of authorship (viz., literary and artistic works), emblems, brands, images, etc. This property is intangible since it is pertinent to the human intellect. Therefore, IP entities are indisputably vulnerable to infringements and modifications without the owner’s consent. IP protection regulations have been deployed and are still in practice, including patents, copyrights, contracts, trademarks, trade secrets, etc., to address these challenges. Unfortunately, these protections are insufficient to keep IP entities from being changed or stolen without permission. Consequently, some IPs require hardware IP protection mechanisms, while others require software IP protection techniques. To secure these IPs, researchers have explored the domain of Intellectual Property Protection (IPP) using different approaches. In this paper, we discuss the existing IP rights and concurrent breakthroughs in the field of IPP research; provide discussions on hardware IP and software IP attacks and defense techniques; summarize different applications of IP protection; and lastly, identify the challenges and future research prospects in hardware and software IP security.